90 research outputs found

    Robustness to lighting variations: An RGB-D indoor visual odometry using line segments

    Full text link
    Abstract – Large lighting variation challenges all visual odometry methods, even with RGB-D cameras. Here we propose a line segment-based RGB-D indoor odometry algorithm robust to lighting variation. Line segments are abundant indoors and less sensitive to lighting change than point features. However, depth data are often noisy, corrupted, or even missing for line segments, which often lie on object boundaries where significant depth discontinuities occur. Our algorithm samples depth data along line segments and uses a random sample consensus (RANSAC) approach to identify correct depth and estimate 3D line segments. We analyze 3D line segment uncertainties and estimate camera motion by minimizing the Mahalanobis distance. In experiments we compare our method with two state-of-the-art methods, a keypoint-based approach and a dense visual odometry algorithm, under both constant and varying lighting. Our method demonstrates superior robustness to lighting change, outperforming the competing methods on 6 out of 8 long indoor sequences under varying lighting. It also achieves improved accuracy under constant lighting when tested on public data.
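
    As a rough illustration of the depth-sampling step described above, the sketch below (not the authors' implementation; the intrinsics tuple, threshold values, and function names are assumptions) runs RANSAC over depth samples back-projected along one image line segment, rejecting the noisy or missing depth values that arise near depth discontinuities and fitting a 3D line to the surviving inliers.

        # Illustrative sketch, not the paper's code: RANSAC over depth samples of a
        # 2D line segment to recover a 3D line despite noisy/missing depth.
        import numpy as np

        def backproject(u, v, z, fx, fy, cx, cy):
            """Back-project pixel (u, v) with depth z into a 3D camera-frame point."""
            return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

        def fit_3d_line(points):
            """Fit a 3D line (centroid, unit direction) to points via PCA."""
            centroid = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - centroid)
            return centroid, vt[0]

        def ransac_line_depth(samples_uvz, intrinsics, iters=100, inlier_thresh=0.02):
            """samples_uvz: (N, 3) array of (u, v, depth) sampled along the segment,
            with invalid (zero/missing) depths already dropped. Returns the 3D line
            fitted to the largest inlier set, or None."""
            pts = np.array([backproject(u, v, z, *intrinsics) for u, v, z in samples_uvz])
            if len(pts) < 2:
                return None
            best_inliers, rng = None, np.random.default_rng(0)
            for _ in range(iters):
                i, j = rng.choice(len(pts), size=2, replace=False)
                d = pts[j] - pts[i]
                if np.linalg.norm(d) < 1e-6:
                    continue
                d /= np.linalg.norm(d)
                diff = pts - pts[i]
                dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)  # point-to-line distance
                inliers = dist < inlier_thresh
                if best_inliers is None or inliers.sum() > best_inliers.sum():
                    best_inliers = inliers
            if best_inliers is None or best_inliers.sum() < 2:
                return None
            return fit_3d_line(pts[best_inliers])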

    Systems and algorithms for autonomously simultaneous observation of multiple objects using robotic PTZ cameras assisted by a wide-angle camera

    Full text link
    Abstract – We report an autonomous observation system with multiple pan-tilt-zoom (PTZ) cameras assisted by a fixed wide-angle camera. The wide-angle camera provides large but low-resolution coverage and detects and tracks all moving objects in the scene. Based on the output of the wide-angle camera, the system generates spatiotemporal observation requests for each moving object, which are candidates for close-up views using PTZ cameras. Since there are usually many more objects than PTZ cameras, the system first assigns a subset of the requests/objects to each PTZ camera. The PTZ cameras then select the parameter settings that best satisfy the assigned competing requests to provide high-resolution views of the moving objects. We solve the request assignment and the camera parameter selection problems in real time. The effectiveness of the proposed system is validated in simulation against an existing method. The simulation results show that in heavy-traffic scenarios, our algorithm increases the number of observed objects by over 200%.
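
    As a rough illustration of the assignment step, the sketch below (a greedy heuristic with hypothetical field names, not the paper's real-time optimization) gives each observation request to the reachable PTZ camera with the lightest current load; each camera would then choose pan-tilt-zoom settings that best satisfy its assigned requests.

        # Illustrative sketch, not the paper's algorithm: greedy assignment of
        # observation requests (from the wide-angle tracker) to PTZ cameras.
        import math
        from dataclasses import dataclass, field
        from typing import Callable

        @dataclass
        class Request:
            obj_id: int
            position: tuple        # (x, y) from the wide-angle tracker
            priority: float = 1.0

        @dataclass
        class PTZCamera:
            cam_id: int
            reachable: Callable    # predicate: can this camera cover a position?
            assigned: list = field(default_factory=list)

        def assign_requests(requests, cameras):
            """Give each request to the reachable camera with the lightest load."""
            for req in sorted(requests, key=lambda r: -r.priority):
                candidates = [c for c in cameras if c.reachable(req.position)]
                if not candidates:
                    continue  # no camera can cover this object right now
                min(candidates, key=lambda c: len(c.assigned)).assigned.append(req)
            return cameras

        # Example: two cameras with circular coverage, five tracked objects.
        cams = [PTZCamera(0, lambda p: math.hypot(p[0], p[1]) < 30),
                PTZCamera(1, lambda p: math.hypot(p[0] - 50, p[1]) < 30)]
        reqs = [Request(i, (10 * i, 5)) for i in range(5)]
        for cam in assign_requests(reqs, cams):
            print(cam.cam_id, [r.obj_id for r in cam.assigned])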

    Toward featureless visual navigation: Simultaneous localization and planar surface extraction using motion vectors in video streams

    Full text link
    Abstract – Unlike traditional feature-based methods, we propose using motion vectors (MVs) from video streams as inputs for visual navigation. Although MVs are noisy and have low spatial resolution, they possess high temporal resolution, which makes it possible to merge MVs from different frames to improve signal quality. Homography filtering and MV thresholding are proposed to further improve MV quality so that we can establish plane observations from MVs. We propose an extended Kalman filter (EKF) based approach to simultaneously track robot motion and planes. We formally model error propagation of MVs and derive the variance of the merged MVs. We have implemented the proposed method and tested it in physical experiments. Results show that the system is capable of performing robot localization and plane mapping with a relative trajectory error of less than 5.1%.
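
    One way to picture the MV-merging step is inverse-variance fusion, a standard technique for combining repeated noisy measurements; the sketch below is only illustrative, since the paper derives the MV variances from its own error-propagation model rather than taking them as given.

        # Illustrative sketch, not the paper's derivation: inverse-variance fusion of
        # motion-vector measurements observed for the same block across frames.
        import numpy as np

        def merge_measurements(values, variances):
            """Fuse measurements (shape (N,) or (N, D)) by inverse-variance weighting.
            The fused variance is smaller than any individual variance, which is why
            merging MVs across frames improves signal quality."""
            values = np.asarray(values, dtype=float)
            weights = 1.0 / np.asarray(variances, dtype=float)
            fused = (weights * values).sum(axis=0) / weights.sum(axis=0)
            return fused, 1.0 / weights.sum(axis=0)

        # Example: three observations of the same MV x-component across frames.
        print(merge_measurements([2.1, 1.8, 2.4], [0.5, 0.3, 0.8]))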